
    Measuring semantic distance for linked open data-enabled recommender systems

    The Linked Open Data (LOD) initiative has been quite successful in terms of publishing and interlinking data on the Web. On top of this huge amount of interconnected data, measuring the relatedness between resources can support various applications such as LOD-enabled recommender systems. In this paper, we propose several distance measures, built on the basic concept of Linked Data Semantic Distance (LDSD), for calculating the semantic distance between resources so that it can be used in an LOD-enabled recommender system. We evaluated the distance measures in the context of a recommender system providing top-N recommendations, comparing them against baseline methods such as LDSD. Results show that performance is significantly improved by our proposed distance measures, which incorporate normalizations that use both the resources themselves and the global appearances of paths in the graph.
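
    As a rough illustration of the kind of measure these proposals build on, the sketch below computes a basic, unnormalized LDSD-style distance over an RDF-like triple graph: the more direct and shared indirect links two resources have, the smaller their distance. The triple representation, the counting helpers, and the omitted normalizations are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of the basic LDSD idea: distance shrinks as two resources
# share more direct and indirect links in an RDF-style graph. Property names
# and the example data are hypothetical.
from collections import defaultdict

class TripleGraph:
    def __init__(self, triples):
        # triples: iterable of (subject, property, object)
        self.out = defaultdict(set)   # subject -> {(property, object)}
        self.inc = defaultdict(set)   # object  -> {(property, subject)}
        for s, p, o in triples:
            self.out[s].add((p, o))
            self.inc[o].add((p, s))

    def c_direct(self, ra, rb):
        # number of properties directly linking ra to rb
        return sum(1 for p, o in self.out[ra] if o == rb)

    def c_indirect_out(self, ra, rb):
        # shared outgoing (property, object) pairs: ra -p-> x <-p- rb
        return len(self.out[ra] & self.out[rb])

    def c_indirect_in(self, ra, rb):
        # shared incoming (property, subject) pairs: x -p-> ra and x -p-> rb
        return len(self.inc[ra] & self.inc[rb])

def ldsd(graph, ra, rb):
    """Basic (unnormalized) Linked Data Semantic Distance between ra and rb."""
    c = (graph.c_direct(ra, rb) + graph.c_direct(rb, ra)
         + graph.c_indirect_out(ra, rb) + graph.c_indirect_in(ra, rb))
    return 1.0 / (1.0 + c)

# Two films sharing a director end up closer than unrelated ones.
g = TripleGraph([
    ("film_A", "director", "person_X"),
    ("film_B", "director", "person_X"),
    ("film_C", "director", "person_Y"),
])
print(ldsd(g, "film_A", "film_B"))  # smaller distance (more related)
print(ldsd(g, "film_A", "film_C"))  # larger distance
```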

    Exploring dynamics and semantics of user interests for user modeling on Twitter for link recommendations

    User modeling for individual users on the Social Web plays an important role and is a fundamental step for personalization and recommendations. Recent studies have proposed different user modeling strategies that consider dimensions such as the temporal dynamics and the semantics of user interests. Although previous work has proposed different user modeling strategies that consider the temporal dynamics of user interests, there is a lack of comparative studies on those methods, so their relative performance is unknown. In terms of the semantics of user interests, background knowledge from DBpedia has been explored to enrich user interest profiles so as to reveal more information about users. However, it is still unclear to what extent different types of information from DBpedia contribute to the enrichment of user interest profiles. In this paper, we propose user modeling strategies that use Concept Frequency - Inverse Document Frequency (CF-IDF) as a weighting scheme and incorporate either or both of the dynamics and semantics of user interests. To this end, we first provide a comparative study of the user modeling strategies in previous literature that consider the dynamics of user interests, to establish their relative performance. In addition, we investigate different types of information about DBpedia entities (i.e., categories, classes, and entities connected via various properties), as well as their combination, for extending user interest profiles. Finally, we build our user modeling strategies by incorporating either or both of the best-performing methods in each dimension. Results show that our strategies significantly outperform two baseline strategies in the context of link recommendations on Twitter.
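
    The following sketch illustrates the CF-IDF weighting mentioned above under simple assumptions: term frequency is replaced by the frequency of concepts (e.g., DBpedia entities) extracted from each user's tweets, and the inverse document frequency is computed over users. The extraction step and the profile structure are hypothetical, not the paper's exact pipeline.

```python
# CF-IDF sketch: weight each concept in a user's profile by how often the
# user mentions it, discounted by how many users mention it at all.
import math
from collections import Counter

def cf_idf_profiles(users_concepts):
    """users_concepts: dict user_id -> list of concept URIs found in their tweets."""
    n_users = len(users_concepts)
    # document frequency: in how many user profiles does each concept appear?
    df = Counter()
    for concepts in users_concepts.values():
        df.update(set(concepts))

    profiles = {}
    for user, concepts in users_concepts.items():
        cf = Counter(concepts)
        profiles[user] = {
            c: freq * math.log(n_users / df[c])   # CF * IDF
            for c, freq in cf.items()
        }
    return profiles

profiles = cf_idf_profiles({
    "u1": ["dbpedia:Semantic_Web", "dbpedia:Twitter", "dbpedia:Twitter"],
    "u2": ["dbpedia:Twitter", "dbpedia:Football"],
})
print(profiles["u1"])  # concepts unique to u1 receive higher weights
```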

    Analyzing MOOC Entries of Professionals on LinkedIn for User Modeling and Personalized MOOC Recommendations

    The main contribution of this work is the comparison of three user modeling strategies, based on the job titles, educational fields, and skills in LinkedIn profiles, for personalized MOOC recommendations in a cold-start situation. Results show that the skill-based user modeling strategy performs best, followed by the job-title-based and education-based strategies.
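
    One plausible reading of a skill-based strategy, shown below purely as a sketch, is to treat the skills listed in a LinkedIn profile as a query and rank MOOCs by the cosine similarity between TF-IDF vectors of that query and of the course descriptions. The vectorizer, the toy data, and the ranking choices are assumptions, not the paper's exact setup.

```python
# Cold-start matching sketch: rank courses against a skill-based user profile.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def recommend_moocs(user_skills, mooc_descriptions, top_n=3):
    """user_skills: list of skill strings; mooc_descriptions: dict title -> text."""
    titles = list(mooc_descriptions)
    corpus = [" ".join(user_skills)] + [mooc_descriptions[t] for t in titles]
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(corpus)
    scores = cosine_similarity(tfidf[0], tfidf[1:]).ravel()
    ranked = sorted(zip(titles, scores), key=lambda x: x[1], reverse=True)
    return ranked[:top_n]

print(recommend_moocs(
    ["machine learning", "python", "data analysis"],
    {"Intro to Data Science": "python, statistics and data analysis basics",
     "Art History": "renaissance painting and sculpture"},
    top_n=1))
```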

    Inferring user interests in microblogging social networks: a survey

    With the growing popularity of microblogging services such as Twitter in recent years, an increasing number of users are using these services in their daily lives. The huge volume of information generated by users raises new opportunities in various applications and areas. Inferring user interests plays a significant role in providing personalized recommendations on microblogging services, as well as on third-party applications that provide social logins via these services, especially in cold-start situations. In this survey, we review user modeling strategies from previous studies with respect to inferring user interests. To this end, we focus on four dimensions of inferring user interest profiles: (1) data collection, (2) representation of user interest profiles, (3) construction and enhancement of user interest profiles, and (4) evaluation of the constructed profiles. Through this survey, we aim to provide an overview of state-of-the-art user modeling strategies for inferring user interest profiles on microblogging social networks with respect to these four dimensions. For each dimension, we review and summarize previous studies based on specified criteria. Finally, we discuss challenges and opportunities for future work in this research domain.
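
    As a small illustration of one profile representation and enhancement strategy commonly seen in this literature, the sketch below models a user interest profile as weighted concepts whose weights decay exponentially with the age of the posts that mention them. The decay form and half-life are illustrative assumptions, not a method taken from the survey.

```python
# Time-decayed interest profile sketch: recent mentions of a concept count
# almost fully, older mentions fade according to a chosen half-life.
import math
from collections import defaultdict

def decayed_profile(mentions, now, half_life_days=30.0):
    """mentions: list of (concept, timestamp_in_days); returns concept -> weight."""
    profile = defaultdict(float)
    for concept, ts in mentions:
        age = now - ts
        profile[concept] += math.exp(-math.log(2) * age / half_life_days)
    return dict(profile)

print(decayed_profile(
    [("dbpedia:Python", 10.0), ("dbpedia:Python", 98.0), ("dbpedia:Rugby", 95.0)],
    now=100.0))
# The recent Python mention contributes close to 1; the old one contributes little.
```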

    Env2Vec: accelerating VNF testing with deep learning

    The adoption of fast-paced practices for developing virtual network functions (VNFs) allows for continuous software delivery and creates a market advantage for network operators. This adoption, however, is problematic for testing engineers who need to assure, in shorter development cycles, a certain level of quality for highly configurable product releases running on heterogeneous clouds. Machine learning (ML) can accelerate testing workflows by detecting performance issues in new software builds. However, the overhead of maintaining separate models for all combinations of build types, network configurations, and other stack parameters can quickly become prohibitive and make the application of ML infeasible. We propose Env2Vec, a deep learning architecture that combines contextual features with historical resource usage and characterizes the various stack parameters that influence test execution within an embedding space, which allows it to generalize model predictions to previously unseen environments. We integrate a single ML model into the testing workflow to automatically debug errors and pinpoint performance bottlenecks. Results obtained with real testing data show an accuracy between 86.2% and 100%, while reducing the false alarm rate by 20.9%-38.1% compared to state-of-the-art approaches when reporting performance issues.
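
    The sketch below conveys the general idea of such an embedding-based model, not the paper's architecture: categorical stack parameters (build type, network configuration, and so on) are mapped into a shared embedding space and concatenated with historical resource-usage features before classification, so that a single model can generalize across environments. Layer sizes, feature dimensions, and the training setup are assumptions.

```python
# Embedding-based environment model sketch (PyTorch): learn vectors for
# categorical stack parameters and combine them with resource-usage features.
import torch
import torch.nn as nn

class EnvEmbedClassifier(nn.Module):
    def __init__(self, vocab_sizes, embed_dim=16, usage_dim=32, hidden=64):
        super().__init__()
        # one embedding table per categorical stack parameter
        self.embeddings = nn.ModuleList(
            [nn.Embedding(v, embed_dim) for v in vocab_sizes])
        in_dim = embed_dim * len(vocab_sizes) + usage_dim
        self.classifier = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2))   # e.g. pass vs. performance issue

    def forward(self, cat_ids, usage):
        # cat_ids: (batch, n_params) integer ids; usage: (batch, usage_dim)
        embs = [emb(cat_ids[:, i]) for i, emb in enumerate(self.embeddings)]
        return self.classifier(torch.cat(embs + [usage], dim=1))

model = EnvEmbedClassifier(vocab_sizes=[10, 5, 8])   # 3 hypothetical parameters
logits = model(torch.randint(0, 5, (4, 3)), torch.randn(4, 32))
print(logits.shape)   # torch.Size([4, 2])
```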

    Deadline-Aware TDMA Scheduling for Multihop Networks Using Reinforcement Learning

    Time division multiple access (TDMA) is the medium access control strategy of choice for multihop networks with deterministic delay guarantee requirements, and many Internet of Things applications therefore use TDMA-based protocols. Optimal slot assignment in such networks is NP-hard when there are strict deadline requirements, and is generally done using heuristics that produce suboptimal transmission schedules in linear time. However, existing heuristics make a scheduling decision at each time slot based on the same criterion, without considering its effect on subsequent network states or scheduling actions. Here, we first identify a set of node features that capture the information necessary to represent the network state and aid in building schedules using Reinforcement Learning (RL). We then propose three different centralized approaches to RL-based TDMA scheduling that vary in their training and network representation methods. Using RL allows applying diverse criteria at different time slots while considering the effect of each scheduling action on meeting the scheduling objective for the entire TDMA frame, resulting in better schedules. We compare the three proposed schemes in terms of how well they meet the scheduling objectives and their applicability to networks with memory and time constraints. One of the proposed schemes, RLSchedule, is particularly suited to constrained networks. Simulation results for a variety of network scenarios show that RLSchedule reduces the percentage of packets missing deadlines by up to 60% compared to the best available baseline heuristic.
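
    As a toy illustration of RL-based slot assignment in general (not the paper's RLSchedule), the sketch below lets a tabular Q-learning agent observe per-node features (queued packets and the tightest deadline) at each slot and pick the node that transmits. The state encoding, the reward, and the absence of multihop interference constraints are all simplifying assumptions.

```python
# Toy Q-learning slot assignment: reward serving a node that still has a
# packet queued before its deadline expires.
import random
from collections import defaultdict

def state_key(nodes):
    # nodes: list of (queued_packets, slots_until_earliest_deadline)
    return tuple(nodes)

def choose_action(Q, s, n_nodes, eps=0.1):
    if random.random() < eps:
        return random.randrange(n_nodes)
    return max(range(n_nodes), key=lambda a: Q[(s, a)])

def train_episode(Q, nodes, alpha=0.1, gamma=0.9):
    for _slot in range(len(nodes) * 2):              # one short TDMA frame
        s = state_key(nodes)
        a = choose_action(Q, s, len(nodes))
        q, d = nodes[a]
        reward = 1.0 if q > 0 and d >= 0 else -1.0   # served before deadline?
        nodes[a] = (max(q - 1, 0), d)
        nodes = [(q2, d2 - 1) for q2, d2 in nodes]   # deadlines tick down
        s2 = state_key(nodes)
        best_next = max(Q[(s2, a2)] for a2 in range(len(nodes)))
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])

Q = defaultdict(float)
for _ in range(200):
    train_episode(Q, [(2, 3), (1, 1), (3, 5)])
```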

    Rényi Divergence Deep Mutual Learning

    This paper revisits Deep Mutual Learning (DML), a simple yet effective computing paradigm. We propose using the Rényi divergence instead of the KL divergence, as it is more flexible and tunable, to improve vanilla DML. This modification consistently improves performance over vanilla DML with limited additional complexity. The convergence properties of the proposed paradigm are analyzed theoretically, and Stochastic Gradient Descent with a constant learning rate is shown to converge with O(1)-bias in the worst-case scenario for nonconvex optimization tasks. That is, learning will reach nearby local optima but continue searching within a bounded scope, which may help mitigate overfitting. Finally, our extensive empirical results demonstrate the advantage of combining DML and the Rényi divergence, which further improves generalization.
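
    The sketch below shows the substitution being studied, under illustrative assumptions: in Deep Mutual Learning each peer network adds a divergence between its softmax output and a peer's output to the usual cross-entropy loss, and here the KL term is replaced by a Rényi divergence of order alpha. The divergence direction, loss weighting, and choice of alpha are assumptions rather than the paper's exact formulation.

```python
# Rényi-divergence mimicry term for one peer in a DML-style setup (PyTorch).
import torch
import torch.nn.functional as F

def renyi_divergence(p, q, alpha=1.5, eps=1e-8):
    """D_alpha(p || q) for rows of probability vectors p, q (alpha != 1)."""
    p, q = p.clamp_min(eps), q.clamp_min(eps)
    inner = torch.sum(p.pow(alpha) * q.pow(1.0 - alpha), dim=1)
    return torch.log(inner) / (alpha - 1.0)

def dml_peer_loss(logits, peer_logits, targets, alpha=1.5, lam=1.0):
    """Cross-entropy plus a Rényi mimicry term toward the (detached) peer."""
    p = F.softmax(logits, dim=1)
    q = F.softmax(peer_logits.detach(), dim=1)
    ce = F.cross_entropy(logits, targets)
    # mimicry term D_alpha(peer || self); gradients flow only through p
    return ce + lam * renyi_divergence(q, p, alpha).mean()

logits = torch.randn(8, 10, requires_grad=True)
peer_logits = torch.randn(8, 10)
loss = dml_peer_loss(logits, peer_logits, torch.randint(0, 10, (8,)))
loss.backward()
```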